method ŧ

the method of single process lifetime management is: all pieces exist in the store, some may also be in memory in the main process. if one requests a piece from the main process: if the piece is in memory they get that reference, if the piece is only in the store then it is loaded in
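
a minimal sketch of this request path ( requestPiece, memoryTrack and store.load are assumed names, not the real interfaces):

class SingleProcessManager:
	def __init__( self, store):
		self.store= store
		# pieces currently held in memory, keyed by id
		self.memoryTrack= {}

	def requestPiece( self, pieceId):
		# if the piece is in memory, that same reference is handed out
		if pieceId in self.memoryTrack:
			return self.memoryTrack[ pieceId]
		# otherwise it is loaded in from the store and tracked
		piece= self.store.load( pieceId)
		self.memoryTrack[ pieceId]= piece
		return piece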

one can send lTSD to another process, it is ensured that any required store-s are sent over too. if one saves data in a secondary process then the data in all other processes should be updated to maintain consistency, as in memory information is always more valid than that of the store. if the other processes were not to be notified then they would hold memory data that was behind the store, which is unacceptable for method ŧ

another way to deal with this situation would be to have no process~s store save data but the primary~s: when a secondary process wishes to save data it must communicate its data to the primary process store, which then updates the memory information and saves the information to the store
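
a sketch of this primary only saving ( the connection protocol and method names are assumptions):

# secondary side: never writes to the store itself
def saveFromSecondary( lTSDId, data, primaryConnection):
	# hand the data over, the primary owns the store
	primaryConnection.send( ( "save", lTSDId, data))

# primary side: update the memory instance first, then persist,
# so memory never falls behind the store
def handleSaveRequest( store, memoryTrack, lTSDId, data):
	if lTSDId in memoryTrack:
		memoryTrack[ lTSDId].data= data
	store.save( lTSDId, data)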

this leaves the organisation of a piece~s lifetime in a more disorganised state than when there is only a single process. with a single process there is only ever at most ( a memory instance) and ( a store instance) of data, and the memory instance is always the truth if present

with multiple processes there is ( a store instance) and there can then be ( a memory instance) in the primary process. when within one process this functions as before. each secondary process is handled as described in # processing

processing

sending lTSD across a process boundary ( process args, pipe, queue):
primary-> secondary: any store-s referenced by the lTSD are ensured to be contained within the secondary as bridging stores
secondary-> secondary: any store-s referenced by the lTSD are ensured to be contained within the receiving secondary as bridging stores

loading a piece:
primary: if in memory then return that instance, otherwise load from store
secondary: if in primary memory, then request and receive that. if not then { if the store supports multi state splitting, retrieve ourselves, otherwise retrieve from primary interface}. it is not saved to any memory track in the secondary process
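
a sketch of the secondary load path ( all names assumed):

def loadPieceInSecondary( pieceId, primary, bridgingStore):
	# prefer the primary memory instance if one exists
	if primary.hasInMemory( pieceId):
		return primary.requestPiece( pieceId)
	if bridgingStore.supportsMultiStateSplitting:
		# the bridging store can retrieve for itself
		return bridgingStore.load( pieceId)
	# otherwise retrieval goes through the primary interface
	# note: the result is not added to any secondary memory track
	return primary.loadFromStore( pieceId)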

creating a new piece:
primary: if tracked by store, saved to store in init-> _dataInit
secondary: if tracked by store, sent to the primary process where it is tracked in memory and saved

saving a piece:
primary: the piece is saved to the store
secondary: { if the store supports multi state splitting then save on the secondary, if not then the primary must save}. if the data is present in the primary process memory then send the data to the primary process, where the lTSD data is replaced
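
a sketch of saving from a secondary ( names assumed, building on the sketches above):

def savePieceInSecondary( lTSD, bridgingStore, primary):
	if bridgingStore.supportsMultiStateSplitting:
		bridgingStore.save( lTSD)
	else:
		# the primary must perform the actual store write
		primary.saveFromSecondary( lTSD.id, lTSD.data)
	if primary.hasInMemory( lTSD.id):
		# replace the primary memory instance data
		primary.replaceData( lTSD.id, lTSD.data)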

retaining: works as primary, with bridging
deletion: triggering deletion on a bridging store also triggers deletion on primary, ensure no double deletion
reload: { if multi state splitting then load in bridging, otherwise load in primary and send over}

throughout this process, any operations that can be done in a secondary process should be done in a secondary process

method đ

this follows mark~s advice about transaction id-s. each piece requested is loaded from the store, perhaps with optional retrieval of an existing version. the multiple instances of that piece in memory then have a mechanism to decide which is written to the store. those which are never asked to be saved are never saved to the store

it is designed around a system in which memory cannot be quickly shared ( as in shared memory) between all software aspects. it permits all aspects to operate without having to send data between each other, all aspects retrieve and push data directly to the store. sharing data between aspects may be desirable, that could be layered upon this system

the ( transaction id)-s must be stored in a non process location

those that are asked to be saved are done so in a sequence when viewed on 1d time. that which becomes the truth in store can be decided by its location in this sequence. instances can also provide a 1d priority value, which can be interpreted purely or in combination with the sequence value. a 1d time value can also be extracted from the time of retrieval. the number of successful saves can also be passed

so some combination of any of ( retrieval sequence, saving sequence, priority sequence) could be used. perhaps a custom function could even be used, which takes the data and its position in all 3 sequences and returns a boolean as to whether it should be saved. knowing the length of the retrieval sequence may also be useful. the attemptedSaveLocation will always be the current last in the sequence, and it is not currently imagined to obtain the priority ahead of saving. mark~s version would only return true if ( attemptedSaveLocation== 0)

# the decider protocol: returns whether this instance should be written to the store
def lTSDSavingDecider( lTSD: LongTermStorageData, retrievalLocation: int, retrievalSequenceLength: int, attemptedSaveLocation: int, priorityLocation: float, deleted: bool)-> bool:...

def firstRequestSaves( lTSD: LongTermStorageData, retrievalLocation: int, retrievalSequenceLength: int, attemptedSaveLocation: int, priorityLocation: float, deleted: bool)-> bool:
	# mark's version: only the first attempted save goes through
	return attemptedSaveLocation== 0

the ability to test whether a save will go through would be useful. maybe this could be a central mechanism: a test can be performed to determine whether an aspect should save, and then a save always performs the save. once a piece is saved then all other lTSD instances across all process-s can be notified using the event system, maybe they could listen for successful saves, non successful, or either. one can also receive a boolean value as to whether a save was successful upon saving
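
a sketch of that central mechanism, where a save runs the decider itself and reports the result ( the transaction record and event names are assumptions):

def testSave( lTSD, decider, record):
	# run the decider against the recorded sequence information
	return decider(
		lTSD,
		record.retrievalLocation( lTSD),
		record.retrievalSequenceLength( lTSD),
		record.nextSaveLocation( lTSD),
		record.priorityLocation( lTSD),
		record.isDeleted( lTSD))

def save( lTSD, decider, record, store, events):
	succeeded= testSave( lTSD, decider, record)
	if succeeded:
		store.save( lTSD)
	# other instances may listen for successful saves,
	# non successful saves, or either
	events.emit( "saveSucceeded" if succeeded else "saveFailed", lTSD)
	# the caller also receives the boolean result
	return succeeded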

the transaction states and ... should be saved to a non process location so that multiple process-s can interact with this system in the same way. this suggests the utility of a default definition of the saving format, as would also be useful with lTSD rep-s. one can inherit from the definition and ( edit, add, remove) entries
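
a sketch of such a default definition ( the entries here are placeholders, not the actual format):

class DefaultSavingFormat:
	entries= {
		"transactionId": str,
		"retrievalSequence": list,
		"saveSequence": list}

class CustomSavingFormat( DefaultSavingFormat):
	# inherit, then ( edit, add, remove) entries
	entries= dict( DefaultSavingFormat.entries)
	entries[ "priority"]= float        # add
	entries[ "transactionId"]= bytes   # edit
	entries.pop( "saveSequence")       # remove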

it makes more sense in this scenario to enforce that all store interfaces must support multi state duplication, if not then a bridging store can be created. the existence of store multi state duplication will not be discussed in # processing

this approach may be more conducive to multiple programming language support, and does not require a constructed primary process which requires constant communication with constructed secondary process-s. the consistent requirement for new retrieval from the store may

DataEditor example

This indicates a break from current single process operation, and compatibility with the existing method must be established. Currently the data editor retrieves the lTSD and also produces a copy. The changes performed by the interactor are instantly performed upon the copy, and when the user wishes to save, the data of the main instance is set and is then saved to the store. the copy is then reconstructed

If ## method đ is applied then the data retriever can retrieve a single copy, edits are then made to that copy and then saved to the store. if a save occurs during editing, then the aspect can test for a save result and can report this back to the user. if there is need to force then do so; if there is no need to force and just a check, then this can be presented as a forceful operation in the gui
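
a sketch of that editor flow ( saveSystem and gui are assumed names):

def editorSave( editedCopy, saveSystem, gui):
	if saveSystem.testSave( editedCopy):
		saveSystem.save( editedCopy)
		return True
	# another instance saved during editing, report back to the user
	if gui.confirmForcefulSave():
		# presented as a forceful operation in the gui
		saveSystem.forceSave( editedCopy)
		return True
	return False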

Instances where one may want to share a reference

Of course multiple aspects may share the same reference manually by accessing one another if possible, but do i wish to support retrieving an existing reference? This would be trivial to implement should the need arise. this may only be possible within a single process, as to obtain data from another process, the execution position of that cursor would have to be directed towards retrieval and sending of that data, unless that data can be retrieved by an aspect other than that process~s executor

processing

sending lTSD across a process boundary ( process args, pipe, queue):
any: it must be ensured that the lTSD~s store is established on the process, the same for all referenced data # copying all referenced data. the data from the original process is sent over and the retrieval index is incremented past the maximum, so the maximum is obtained and a new entry added

loading a piece: the piece is obtained from the store, the max transaction index is ( obtained, incremented, set for the retrieved piece)
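
a sketch of that load ( the transaction record interface is an assumption):

def loadPiece( pieceId, store, record):
	piece= store.load( pieceId)
	# the max transaction index is ( obtained, incremented,
	# set for the retrieved piece)
	index= record.maxIndex( pieceId)+ 1
	record.setIndex( piece, index)
	return piece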

creating a new piece: the piece is saved to the store and then a transaction index of 0 is recorded

testing for a piece~s save result: the save result function is run in the calling process to obtain the input data, ( retrievalSequenceLength, attemptedSaveLocation) must be retrieved from the store

saving a piece: the save result function must be run at the time of saving, then the originating process~s interface is used to save the data. a lock may be needed to prevent multiple process-s from performing an overlapping save operation
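
a sketch of that save path, assuming a multiprocessing lock shared with all child process-s ( saveSystem and originatingInterface are assumed names):

from multiprocessing import Lock

# must be created before spawning children so they share it
saveLock= Lock()

def savePiece( lTSD, saveSystem, originatingInterface):
	# the lock prevents overlapping save operations across process-s
	with saveLock:
		if saveSystem.runSaveDecider( lTSD):
			originatingInterface.save( lTSD)
			return True
	return False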

retaining: this works the same as at present, where the data can be added to a dictionary upon the calling process~s store

deletion: the data should be removed from the store, all retained memory instances should be removed from the store, whether the existing transaction-s should be permitted to save and un-delete data is unknown. All should be notified and ejected from retainment but then they could either be blocked or not blocked from future saving, they could also have the data deleted from quick memory. Other processes than the originating will not be initially aware of the change and so a check for deletion must be performed upon all operations that this will affect, maybe those within the lTSD

reload: the data should be retrieved from the store using the originating instance

copying all referenced data

this must be done during the serialisation. this is a good case for implementing the concept of a serialisation session, in which common data is made available, this avoids passing the data around as arguments to each function. the current entry points into serialisation can stay the same and initialise the session, calling the actual serialisation initialisation, then all components of serialisation that wish to recursively do so can call the internal function. This allows one to effectively track the lifetime of a session
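
a sketch of such a session ( all names assumed):

from contextlib import contextmanager

_session= None

@contextmanager
def serialisationSession():
	# common data is made available for the lifetime of the session,
	# avoiding passing it as arguments to each function
	global _session
	_session= { "referencedData": []}
	try:
		yield _session
	finally:
		_session= None

def serialise( piece):
	# entry point: stays the same, but initialises the session
	with serialisationSession():
		return _serialiseInternal( piece)

def _serialiseInternal( piece):
	# components that recurse call this internal function directly,
	# so the one session spans the whole serialisation
	_session[ "referencedData"].append( piece)
	return piece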